-
Evaluations, encompassing computational evaluations, benchmarks, and user studies, are essential tools for validating the performance and applicability of graph and network layout algorithms (also known as graph drawing). These evaluations not only offer significant insight into an algorithm's performance and capabilities, but also help readers determine whether the algorithm suits a specific purpose, such as handling graphs with many nodes or dense graphs. Unfortunately, there is no standard approach to evaluating layout algorithms. Prior work presents a 'Wild West' of diverse benchmark datasets and data characteristics, as well as varied evaluation metrics and ways of reporting results. It is often difficult to compare layout algorithms without first implementing them and running one's own evaluation. In this systematic review, we examine the many methodologies used to conduct evaluations: the techniques employed, the outcomes reported, and the pros and cons of choosing one approach over another. Our examination extends beyond computational evaluations to user-centric evaluations, presenting a comprehensive picture of algorithm validation. This systematic review and its accompanying website guide readers through evaluation types, the kinds of results reported, and the available benchmark datasets and their data characteristics. Our objective is to provide a valuable resource for understanding and effectively applying the various evaluation methods for graph layout algorithms. A free copy of this paper and all supplemental material is available at osf.io, and the categorized papers are accessible on our website at https://visdunneright.github.io/gd-comp-eval/
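
For readers unfamiliar with what a "computational evaluation" of a layout algorithm measures, the following sketch illustrates one widely used layout-quality metric, stress, which compares distances in the drawing against graph-theoretic distances. It is only an illustration under assumed tooling (networkx and one common stress variant); it is not taken from the paper or its benchmark datasets.

# Illustrative sketch: stress of a 2D graph layout (lower is better).
# Metric choice and normalization vary widely across surveyed papers;
# this is one common, distance-weighted variant. In practice layouts are
# usually rescaled to minimize stress before comparison; omitted here.
import itertools
import networkx as nx

def layout_stress(G, pos):
    """Sum of weighted squared differences between Euclidean distances
    in the layout and shortest-path distances in the (connected) graph."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    stress = 0.0
    for u, v in itertools.combinations(G.nodes, 2):
        graph_dist = d[u][v]
        (x1, y1), (x2, y2) = pos[u], pos[v]
        layout_dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        stress += (layout_dist - graph_dist) ** 2 / graph_dist ** 2
    return stress

# Compare two layout algorithms on a small, connected example graph.
G = nx.karate_club_graph()
print(layout_stress(G, nx.spring_layout(G, seed=1)))
print(layout_stress(G, nx.circular_layout(G)))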
-
In this paper, we present the Vis Repligogy framework, which enables conducting replication studies in the classroom. Replication studies are crucial to strengthening the data visualization field and ensuring that its foundations are solid and its methods accurate. Although visualization researchers acknowledge the epistemological significance of replications and their necessity for establishing trust and reliability, the field has made little progress in supporting the publication of such studies and, importantly, in providing the community with methods that encourage replications. We therefore contribute Vis Repligogy, a novel framework for systematically incorporating replications into visualization course curricula that not only teaches students replication and evaluation methodologies but also yields executed replication studies that validate prior work. To demonstrate the feasibility of the framework, we present case studies of two graduate data visualization courses that implemented it, resulting in a total of five replication studies. Finally, we reflect on our experience implementing the Vis Repligogy framework and offer recommendations for future use. We envision that this framework will encourage instructors to conduct replications in their courses, facilitate more replications in visualization pedagogy and research, and support a culture shift toward reproducible research. Supplemental materials for this paper are available at https://osf.io/ncb6d/
-
Timelines are commonly represented on a horizontal line, which is not necessarily the most effective way to visualize temporal event sequences. However, few experiments have evaluated how timeline shape influences task performance. We present the design and results of a controlled experiment run on Amazon Mechanical Turk (n=192) in which we evaluate how timeline shape affects task completion time, correctness, and user preference. We tested 12 combinations of 4 shapes (horizontal line, vertical line, circle, and spiral) and 3 data types (recurrent, non-recurrent, and mixed event sequences). We found good evidence that timeline shape meaningfully affects user task completion time but not correctness, and that users have a strong shape preference. Building on our results, we present design guidelines for creating effective timeline visualizations based on user task and data type. A free copy of this paper, the evaluation stimuli and data, and code are available at https://osf.io/qr5yu/
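
The factorial structure described above (four shapes crossed with three data types) can be made concrete with a short enumeration. The sketch below lists the 12 conditions named in the abstract; the round-robin assignment of the 192 participants is an illustrative assumption, not the counterbalancing scheme the study actually used.

# Illustrative sketch: the 4 x 3 factorial design from the abstract.
import itertools

SHAPES = ["horizontal line", "vertical line", "circle", "spiral"]
DATA_TYPES = ["recurrent", "non-recurrent", "mixed"]

conditions = list(itertools.product(SHAPES, DATA_TYPES))
assert len(conditions) == 12

# Hypothetical round-robin assignment: 192 participants spread evenly
# over 12 conditions gives 16 per condition. The paper's actual
# counterbalancing may differ.
assignment = {p: conditions[p % len(conditions)] for p in range(192)}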
